Search Results: "Raphael Geissert"

13 February 2013

Raphael Geissert: A bashism a week: negative matches

Probably due to the popular way of expressing the negation of a character class in regular expressions, it is common to see negative patterns such as [^b] in shell scripts.

However, using an expression such as [^b] where the shell is the one processing the pattern will cause trouble with shells that don't support that extension. The right way to express the negation is using an exclamation mark, as in: [!b]

Big fat note: this only applies to patterns that the shell is responsible for processing. Some such cases are:

case foo in
    [!m]oo)
        echo bar
        ;;
esac
and
# everything but backups:
for file in documents/*[!~]; do
    echo doing something with "$file" ...
done

If the pattern is processed by another program, beware that most won't interpret the exclamation mark the way the shell does. For example:

$ printf "foo\nbar\nbaz\n"   grep '^[^b]'
foo
$ printf "foo\nbar\nbaz\n" grep '^[!b]'
bar
baz

6 February 2013

Raphael Geissert: A bashism a week: short-circuiting tests

The test/[ command is home to several bashisms and, as I believe I have demonstrated before, incompatible behaviour is to be expected.

The "-a" and "-o" binary logical operators are no exception, even if documented by the Debian Policy Manual.

One feature of writing something like the following code is that, upon success of the first command, the second won't be executed: it is short-circuited.
[ -e /dev/urandom ] || [ -e /dev/random ]

Now, using the "-a" or "-o" bashisms even in shell interpreters that support them can result in unexpected behaviour: some interpreters will short-circuit the second test, others won't.

For example, bash doesn't short-circuit:
$ strace bash -c '[ -e /dev/urandom -o -e /dev/random ]' 2>&1 | grep /dev
stat64("/dev/urandom", ...) = 0
stat64("/dev/random", ...) = 0
Neither does dash:
$ strace dash -c '[ -e /dev/urandom -o -e /dev/random ]' 2>&1 | grep /dev
stat64("/dev/urandom", ...) = 0
stat64("/dev/random", ...) = 0
But posh does:
$ strace posh -c '[ -e /dev/urandom -o -e /dev/random ]' 2>&1 | grep /dev
stat64("/dev/urandom", ...) = 0
And so does pdksh:
$ strace pdksh -c '[ -e /dev/urandom -o -e /dev/random ]' 2>&1 | grep /dev
stat64("/dev/urandom", ...) = 0

(output of strace redacted for brevity)

So even in Debian, where the feature can be expected to be implemented, its semantics are not very well defined. So much for using this bashism... better to avoid it.
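
The portable rewrite keeps each test in its own command and lets the shell's own operators do the short-circuiting. A minimal sketch:

# instead of: [ -e /dev/urandom -o -e /dev/random ]
[ -e /dev/urandom ] || [ -e /dev/random ]
# instead of: [ -n "$a" -a -n "$b" ]
[ -n "$a" ] && [ -n "$b" ]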

Remember, if you rely on any non-standard behaviour or feature make sure you document it and, if feasible, check for it at run-time.

30 January 2013

Raphael Geissert: A bashism a week: sleep

To delay execution of some commands in a shell script, the sleep command comes in handy.
Even though many shells do not provide it as a built-in and the GNU sleep command is the one actually used, there are a couple of things to note:

  1. Time suffixes such as s, m, h, and d (as in "sleep 1m") are an extension.
  2. So are fractional seconds (as in "sleep 0.5").
This, of course, is with regard to what POSIX:2001 requires: the sleep command only has to take an unsigned integer. FreeBSD's sleep command does accept fractions of seconds, for example.

Remember, if you rely on any non-standard behaviour or feature make sure you document it and, if feasible, check for it at run-time.

In this case, since the sleep command is not required to be a built-in, it does not matter what shell you specify in your script's shebang. Moreover, calling /bin/sleep doesn't guarantee you anything. The exception is when you specify a shell that has its own sleep built-in; in that case you could probably rely on it.

The easiest replacement for suffixes is calculating the desired amount of time in seconds. As for fractional seconds, you may want to reconsider your use of a shell script.
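
For example, a minimal sketch of the suffix replacement (the durations are illustrative):

# portable: 90 seconds instead of the GNU-only "sleep 1.5m"
sleep 90
# or compute the figure with shell arithmetic
minutes=5
sleep $(( minutes * 60 ))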

23 January 2013

Raphael Geissert: A bashism a week: output redirection

Redirecting stdout and stderr to the same file or file descriptor with &> is common and nice, except that it is not required to be supported by POSIX:2001. Moreover, trying to use it with shells not supporting it will do exactly the opposite:

  1. The command's output (to stdout and stderr) won't be redirected anywhere.
  2. The command will be executed in the background.
  3. The file will be truncated, if redirecting to a file and not using >>.

Are the characters saved worth those effects? I don't think so. Just use this instead: "> file 2>&1". Make sure you get the first redirection right: "&> file 2>&1" isn't going to do the trick.
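
A minimal sketch, where the command and file names are illustrative:

# portable: both stdout and stderr end up in the same file
some_command > output.log 2>&1
# the append form works the same way
some_command >> output.log 2>&1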

22 January 2013

Raphael Geissert: The death of the netbooks?

It's been over four years since I bought my ASUS Eee PC 1000h, and I have used it almost daily ever since. Back when I bought it, new models from different brands were being released every few months due to the netbook hype.

In spite of being resource-limited due to its 1.60 GHz Atom CPU and only 1 GB of RAM, I've managed to do pretty much everything with it. Building software is slow and watching HD videos is nearly impossible, even more so when streamed from the internet and played with flash. Its limited memory capacity makes the kernel swap tens of megabytes before the KDE4 desktop is fully loaded. After launching some day-to-day applications there are usually hundreds of MBs in swap.

In spite of all this, I run the KDE4 desktop and have been able to do things such as running up to two Debian virtual machines with several services (Apache httpd, MySQL server, OpenLDAP, Squid, etc.) and a Windows XP one, all at the same time, under VirtualBox. I could probably have booted another Debian virtual machine, but that would most likely have rendered the DE unusable. Oh, and did I mention that this is on the VT-x-less N270 CPU?

This so-called netbook has proved to be rock-solid. Every component is still fully functional except for the battery: it once lasted seven hours, but didn't stand a full year of day-to-day use. The keyboard is still intact and so is everything else.

Last year I thought I was going to have to seriously consider buying a replacement after seeing what I think are some signs of the end of its life: after a routine deep cleanup the keyboard stopped working properly, to the point that I couldn't even log in because half the keys would send the signal of a totally unrelated key. I bought an external, but still small, USB keyboard, which I used until the next deep cleanup somehow made the built-in keyboard work again.

The second sign came soon after the keyboard issue. The AC adapter was, well, no longer supplying power to the machine. Trying to buy one online proved to be futile. Replacement supplies for ASUS equipment are hard to find here in Mexico, and importing them from the US makes the item twice (or more) as expensive due to import taxes. They are even more expensive once one adds up the cost of shipping.

Fortunately, after spending some hours hunting down the failure, it turned out to be a problem with the adapter's wires. Cut wires repaired, the adapter was working again. The unit itself wasn't at fault.

Back to 2013: this netbook is ageing, and every time I've looked at potential replacements I've found none that I like. I'm looking for another netbook/ultrabook/laptop/whatever that is rock-solid, has a 10.1" or 11" display, and has a similarly compact but not oh-so-small-that-I-can't-even-type-by-only-using-my-fingertips keyboard.

The only devices that have caught my eye are the ASUS Transformers (with the dock). I'm not interested in a device that only has 1 GB of memory and between 32 and 64 GB of storage, however. I'm limited enough with my Eee's 160 GB HDD.

For my needs, the pre-installed Android would have to go away, and while I guess it would be fun to get a Transformer to run under a standard Debian Linux kernel, I'm not interested in doing that kind of kernel work, so the Transformers are out of the question.

Based on this I think I can only partially agree with Russell Coker when he states that
If tablet computers with hardware keyboards replace traditional Netbooks that's not really killing Netbooks but introducing a new version of the same thing.
Tablets with hardware keyboards may, perhaps, be the next generation of sub-10" netbooks, but to date I've yet to see something with a display smaller than 13" that is an upgrade over the 1000h Eee I own.

21 January 2013

Raphael Geissert: January's Debian mirrors update

It's been slightly over a month since December's update to http.debian.net. Since then, Debian's mirrors network has grown by 6 more archive mirrors. Many thanks to the Debian sponsors running them!

There are now about 370 mirrors serving the archive over HTTP, an increase of 40 (12%) since April last year. The number of backports mirrors is now at 82, and 25 for archive.debian.org.

On the http.debian.net front there haven't been many changes since last month. Some major changes are in the works, but they didn't make it into January's code update. There were, however, a few issues with one of the hosts during the first couple of days of January. Apologies for any inconvenience they may have caused.

A new version of ftpsync addressing some issues should hopefully be released some time next month. Stay tuned to the debian-mirrors mailing list for a call for testers and probably a survey for mirror administrators.

16 January 2013

Raphael Geissert: A bashism a week: ulimit

Setting resource limits from a shell script is commonly done with the ulimit command.
Shells provide it as a built-in, if they provide it at all. As far as I know, there is no non-built-in ulimit command. One could be implemented with the Linux-specific prlimit system call, but even that requires a fairly "recent" kernel version (circa 2010).

Depending on the kind of resource you want to limit, you may get away with what some shells such as dash provide: CPU, FSIZE, DATA, STACK, CORE, RSS, MEMLOCK, NPROC, NOFILE, AS, and LOCKS. That is, options -t, -f, -d, -s, -c, -m, -l, -p, -n, -v, and -w, plus -H for the hard limit, -S for the soft limit, and -a for all. Bash allows other resources to be limited.

Remember, if you rely on any non-standard behaviour or feature make sure you document it and, if feasible, check for it at run-time. ulimit is not required by POSIX:2001 to be implemented for the shell.
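
A minimal sketch of such a run-time check (the resource and value are illustrative):

# probe in a subshell whether this shell's ulimit supports -n
if (ulimit -n 1024) 2>/dev/null; then
    ulimit -n 1024
else
    echo "warning: cannot set the open-files limit" >&2
fi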

10 January 2013

Raphael Geissert: Email security vs. MUAs

So much for security, when using a "smart" MUA

Also note that the https URL is now http due to the email security best practice.

9 January 2013

Raphael Geissert: A bashism a week: brace expansion

Brace expansion is well known and handy, but sadly it is not required by POSIX:2001. Shells that don't support it will simply and silently leave it as is.

If you use it to shorten commands, as in "echo Debian GNU/{Linux,kFreeBSD}", you have to spell it out or use some sort of loop.

When using brace expansion for sequences you will usually have to fall back to using the seq command or using loops. "{1..9}" can be replaced with "seq -s ' ' 1 9", "{1..9..2}" with "seq -s ' ' 1 2 9", and so on.
If you use brace expansion for sequences of characters then seq won't be of much help.

I must note that the seq command is not required by POSIX:2001, however.
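
For numeric sequences, a loop using only POSIX shell arithmetic avoids depending on seq altogether. A minimal sketch:

# a seq-free replacement for {1..9}
i=1
while [ "$i" -le 9 ]; do
    printf '%s ' "$i"
    i=$((i + 1))
done
echo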

Remember, if you rely on any non-standard behaviour or feature make sure you document it and, if feasible, check for it at run-time.

2 January 2013

Raphael Geissert: A bashism a week: read

Whether for interacting with the caller, reading the output of some command, or reading from a file descriptor in general, the read shell command can be found in many scripts.

Unless you stick to the POSIX:2001-required "read variable_name", possibly with the -r option, you should expect problems. Common extensions you should not count on include bash's -p (prompt), -t (timeout), -n (number of characters), -d (alternative delimiter), -s (silent input), and -a (read into an array).

dash, for instance, supports the prompt option but nothing else.
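
Two minimal sketches that stick to the required form (the prompt text and file name are illustrative):

# a prompt without bash's read -p
printf 'Continue? [y/N] '
read -r answer

# reading a file line by line
while IFS= read -r line; do
    printf '%s\n' "$line"
done < input.txt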

Remember, if you rely on any non-standard behaviour or feature make sure you document it and, if feasible, check for it at run-time.

24 December 2012

Raphael Geissert: A bashism a week: taking a break

Short notice: due to holidays and people, rightfully, not paying much attention to the online world, this Wednesday there won't be a post from the "a bashism a week" series.

Enjoy the break.

19 December 2012

Raphael Geissert: A bashism a week: testing for equality

Well known, yet easy to find just about everywhere: using the "test"/"[" commands to test for equality with two equals signs (==).

Contrary to many programming languages, if you want to test for equality in a shell script you must only use the equals sign once.

Try to keep this in mind: under a shell that implements what is required by POSIX:2001, you may hit the unexpected in the following code.

if [ foo == foo ]; then
    echo expected
else
    echo unexpected
fi
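
The portable form uses a single equals sign:

if [ foo = foo ]; then
    echo expected
fi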

18 December 2012

Raphael Geissert: Nicer, but stricter

Lately I've been working on making the redirector nicer to the mirrors and to some potential users. More specifically, those behind a caching proxy.

The redirector is now nicer to traditional web proxies by redirecting to objects that are known not to change with a "moved permanently" code (HTTP status code 301). This applies to files in the pool/ directory and ".pdiff" files, among others.
Previously, a traditional caching web proxy would sometimes end up with multiple copies of the same object, fetched from different mirrors; and the redirection would not be cached at all. With this change, this is no longer the case.

Using a caching proxy that is aware of the Debian repository design is still more likely to yield better results, however. If my memory serves correctly, apt-cacher has the ability to update the Packages, Sources, and similar files with the ".pdiff"s on the server side; Apt-Cacher-NG apparently can use debdelta; and so on.
Check my blog post about one APT caching proxy not being efficient for some comments related to those tools.

Another recent change is that mirrors that can't be used by the redirector will no longer be monitored as often as the other mirrors. For instance, if a mirror doesn't generate a trace file (used for monitoring) then the redirector will gradually limit the rate at which the mirror is checked.
This rate-limiting mechanism applies to different kinds of errors, and should reduce the amount of wasted time and bandwidth while still allowing automatic-detection of mirrors that recover.


[Chart: projection of a rate-limited mirror over six weeks. The mirror would have to fail in every attempt for that to happen. N.b. there's a bump in the scale.]

The rate limiter applies an initial exception so that temporary errors do not affect the use of the mirror by the redirector. After that exception, it is pretty much linear. That chart doesn't really convey the rate limiter's effect on its own, though, so here it is compared with the normal checking behaviour:


[Chart: comparison of the two behaviours over an eight-week period, using a logarithmic scale. Nice chart colours by LibreOffice.]

The code to detect mirrors that don't perform a two-stage sync, which I talked about in a previous post, has not yet been integrated, as the current implementation would be too expensive on the mirrors to add as-is.

While tracking down problems exposed to users, I decided to take a stricter approach as to which mirrors are used by the redirector. Suffice it to say that the remaining mirrors using the obsolete anonftpsync are going to be ignored entirely. ftpsync has been around for a few years now and it is the standard tool.
Whether you are mirroring Debian, Raspbian, Ubuntu, or any other Debian-like package repository, ftpsync is the right tool to use.

Most of the issues I've been discovering, and sometimes working around, affect direct users of the mirrors and are not related to the http.debian.net redirector. When not detected beforehand they happen to be exposed by the redirector, but like I said, I plan to be stricter in order to increase the redirector's reliability. Once a strict and reliable foundation is built, more workarounds might see their way in to better use the available resources.

That's it for now. The road is long, the challenge is great, and being an observer in an uncontrolled environment makes it even more interesting.

12 December 2012

Raphael Geissert: A bashism a week: $RANDOM numbers

Commonly used to sleep a random amount of time or to create unique temporary file names, $RANDOM is one of those bashisms that you are best avoiding altogether.

It is not uncommon to see scripts generating a "unique" temporary file name with code that goes like: tempf="/tmp/foo.$RANDOM", or tempf="/tmp/foo.$$.$RANDOM".

Under some shells the "unique" temporary file name will be "/tmp/foo." for the first example code. So much for randomness, right?

Even if you work around it by setting $RANDOM to the output of cksum after reading some bytes from /dev/urandom, please: don't do that. Use the mktemp command instead.
When creating temporary files there's more to it than just generating a file name. Just don't do it on your own: use mktemp. Really, use it; the list of those who weren't using mktemp (or similar) is long enough as it is.

Don't even dare to mention the Linux kernel-based protection against symlink attacks. There's no excuse for not using mktemp.

Tip: If you are going to use multiple temporary files, create a temporary directory instead. Use mktemp -d.
Tip: Don't reuse a temporary file's name, even if you unlink/remove it. Generate a new one with mktemp.
Tip: Reusing also means doing things like tmp="$(mktemp)"; some_command > "$tmp.stdout" 2> "$tmp.stderr"
Tip: Even if $RANDOM is not empty, don't use it. It could have been exported as an environment variable. Again, just use mktemp.
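
Putting the tips together, a minimal sketch (the command is illustrative):

# one temporary directory for everything, cleaned up on exit
tmpdir="$(mktemp -d)" || exit 1
trap 'rm -rf "$tmpdir"' EXIT
some_command > "$tmpdir/stdout" 2> "$tmpdir/stderr"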

For the remaining cases where you may want a pseudo random number such as for sleeping a random number of seconds: you can use something as simple as $$. Use shell arithmetic to adjust it as needed: use the modulo operator, multiply it, etc.
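
A minimal sketch (the range is illustrative):

# sleep between 0 and 59 seconds, derived from the process id
sleep $(( $$ % 60 ))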

If you think you need something more "random" than the process' id, then you should probably not be using $RANDOM in the first place.

5 December 2012

Raphael Geissert: Introducing: a bashism a week

No matter how many scripting languages exist, it appears that shell programming is here to stay. In many cases it is fast, it "does the job", and best of all: it is available "everywhere". The shell is used by makefiles, on every call to system(), and whatnot.

However, it is a real pain: implementations differ from the standards, some implementations still in use pre-date them, the standards leave room for undefined behaviour, and bugs in the implementations are anything but unknown. You can't just specify a given shell interpreter and think you've dealt with the problem. Writing shell scripts that are portable among many platforms is a nightmare, if even possible.

Surprisingly, in spite of all that, a great number of shell scripts appear to work flawlessly on many systems.

The switch from bash to dash as the default shell interpreter in Debian wasn't done without quite some work (more if you list archived bug reports), and the work ain't over.

Over the following months I will be writing about a different "bashism" every Wednesday, hopefully helping people write slightly-more-portable shell scripts. The posts are going to be focused on widely-seen bashisms, probably ignoring those that Debian's policy defines as required to be implemented.

The term "bashism" must be understood as any feature or behaviour not required by SUSv3 (aka POSIX:2001), no matter what its origins are or even if the behaviour is not exhibited by the bash shell.

One of the key points is documenting the script's requirements, starting by specifying the right shell interpreter in the shebang.

Let's see what comes out of this experiment.

As a matter of fact, I have a few months worth of posts written already. All posts are going to be published at scheduled dates, just like this very post.

3 December 2012

Raphael Geissert: Some things you wanted to know about http.debian.net

After quite a bit of, very welcome, feedback I've put together a FAQ page in an attempt to respond to the most common questions about http.debian.net.

Emails have been accumulating for a few weeks now, but I will get to them. So please be patient if you send me an email, or if you have sent me one.

17 November 2012

Raphael Geissert: Better routing, less bad apples

Another month, another update to http.debian.net. This time around most of the work was done outside the redirector's code base, as strange as it may sound.

The redirector heavily relies on the mirrors doing at least a couple of things right, for the rest it can and does compensate. When it needs to compensate, certain requests are redirected to automatically-detected good mirrors, thus avoiding mirrors that might work fine for some parts of the day but cause headaches during the rest.

So, part of the work done since the last update was to prod more mirror administrators to upgrade to the latest version of ftpsync. This reduces the number of mirrors for which compensation is needed in order to avoid errors during installations and upgrades. Hopefully, no additional work is needed for the redirector to notice the upgrades. This results in immediate improvements.

However, not all mirrors comply with the bare minimum requirements. As stated in my previous blog post, running rsync once is not enough. When mirrors break these assumptions they lead to the "bad apple" effect. The effects in this case are temporary errors, as experienced by some people. The interesting part of those issues is that the affected population may quickly change given the redirector's use of geo location and the way it creates mirror subsets.

As interesting as the distribution of the effects may be, they are not really welcome. So I put together some code to attempt to detect the bad apples. This resulted in a list of mirrors that have now been disabled in the redirector and whose administrators are going to be contacted so that they comply with the minimum requirements. Given that detection is time-sensitive, there's no 100% guarantee that all of them have been identified so far. The code to detect them will have to be adapted and integrated into the redirector's code base to be proactive in avoiding this kind of issue.

Last but not least, the redirector is now using a database of AS peers for better (re-)routing. This is the next move towards decision-making based more on network location/topology than on geographic location. This first use of a peers database is limited to IPv4 and is based on a recent routing table dump and on feedback provided by interested people. If you are a mirror or network administrator, or you are familiar with the topology of your network, please drop me an email so that the redirector can make better use of your peering agreements.

N.b. in the case of the database, the term peer may also include transit providers. It is used to refer to and establish a relationship between two AS(N)s.


Feedback is, as always, welcome. I read each and every email but it may take me some time to get to it, or reply.

31 October 2012

Raphael Geissert: rsync is not enough

Ask how one can create a Debian mirror and you will get a dozen different responses. Those who are used to mirroring content, whether it is distribution packages, software in general, documents, etc., will usually come up with one answer: rsync.

Truth is: for Debian's archive, rsync is not enough.

Due to the design of the archive, a single call to rsync leaves the mirror in an inconsistent state during most of the sync process.
The explanation is simple: index files are stored in dists/, package files are stored in pool/. Sort those directories by name (just like rsync does) and the indexes end up being updated before the actual packages are downloaded.

There are multiple scripts out there that do exactly that, one of them in Ubuntu's wiki. Plenty more if you search the web.

Now, addressing that issue shouldn't be so difficult, right? After all, all the index files are in dists/, so syncing in two stages should be enough. It's not that simple.

With the dists/ directory containing over 8.5 GB worth of indexes and, erm, installer files, even a two-stage sync will usually leave the mirror in an inconsistent state for a while.
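
For concreteness, a naive two-stage sync could look like the following sketch; the host and paths are illustrative, and ftpsync (see below) is what actually does this properly:

# stage 1: everything except the index files in dists/
rsync -a --exclude='dists/' rsync://mirror.example.org/debian/ /srv/mirror/debian/
# stage 2: now dists/, so the indexes never point at packages that are missing
rsync -a --delete rsync://mirror.example.org/debian/ /srv/mirror/debian/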

How about deferring only the bare minimum to the second stage, I hear you ask?
That is the current approach, but it leads to some errors when new index files are added and used. The fact that people insist on writing their own scripts doesn't help.

Hopefully, some ideas like moving the installer files out of dists/ and overhauling the repository layout are being considered. An alternative is to make the users of the mirrors more robust and fault-tolerant, but we would be talking about tens if not hundreds of tools that would need to be improved.

In all cases, the one script that is actively maintained, is rather portable, and improved from time to time is the ftpsync script. Please, do yourself and your users a favour: don't attempt to reinvent the wheel (and forget about calling rsync just once).

29 October 2012

Paul Wise: Some thoughts on using Debian

I'm currently running Debian's rolling release (aka "testing") on my main machine and have added some stuff to make that nicer.

The first thing I have is configuration and package management. Since I have relatively few machines, I am using a metapackage per machine that installs some configuration files with changes that I want. The metapackages depend on the packages that I need installed so that I can mark all other packages as being automatically installed. The metapackages are also useful for documenting why I have things installed. They depend on things like task-laptop from tasksel, hardware support packages, the GUI I use, games I play often and so on. My laptop does not have a CD/DVD drive so I have some metapackages to fool apt into ignoring dependencies on CD/DVD related packages I don't need. I'm building the metapackages using equivs-build and a small Makefile, and I use the File: header supported by equivs-build for installing config files.

I have popcon installed and enabled, but I don't want it to leak the names of the metapackages, so I have added a prefix to my metapackages and modified the popcon cron job to remove anything containing that prefix. I also don't want apt to ever remove the metapackages, so I mark them as Important: yes and configure apt to never autoremove them.
--- /etc/cron.daily/popularity-contest~
+++ /etc/cron.daily/popularity-contest
@@ -71,8 +71,8 @@
 # try to post the report through http POST
 if [ "$SUBMITURLS" ] && [ "yes" = "$USEHTTP" ]; then
     for URL in $SUBMITURLS ; do
-   if setsid /usr/share/popularity-contest/popcon-upload \
-       -u $URL -f $POPCON 2>/dev/null ; then
+   if grep -v myprefix- $POPCON | setsid /usr/share/popularity-contest/popcon-upload \
+       -u $URL 2>/dev/null ; then
        SUBMITTED=yes
    else
        logger -t popularity-contest "unable to submit report to $URL."
@@ -94,7 +94,7 @@
        echo "MIME-Version: 1.0"
        echo "Content-Type: text/plain"
        echo
-       cat $POPCON
+       grep -v myprefix- $POPCON
    ) | do_sendmail
    SUBMITTED=yes
     else
/etc/apt/apt.conf.d/99metapackages:
APT::NeverAutoRemove { "^myprefix-.*"; };
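Going back to the metapackages: the control file fed to equivs-build might look roughly like this (a hypothetical sketch; the package name, dependencies, and description are illustrative):
Section: metapackages
Priority: optional
Package: myprefix-laptop
Version: 1.0
Depends: task-laptop
Important: yes
Description: metapackage for this laptop
 Dependencies and configuration for this machine, so that
 everything else can be marked as automatically installed.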
I am using Raphael Geissert's mirror redirector in order to automatically use up-to-date and hopefully non-broken mirrors. Unfortunately this often causes apt to complain about hash sum mismatches and then proceed to forget about all packages. I work around this by always running apt-get update in a loop until it succeeds.
while ! apt-get update ; do sleep 1m; done
A lot of the time I need to install packages from outside of testing. So my sources.list contains lines for testing, unstable and experimental. I have some apt pinning so that by default I only have packages from testing but if I manually upgrade some packages to unstable or experimental, then I will get upgrades within that suite until those packages migrate down to unstable or testing. The apt pinning needs priorities between 1000 and 500 for this to work nicely. I also pin some things like lintian, debian-policy and devref to unstable/experimental since having old versions of those is not useful. /etc/apt/sources.list:
# testing
deb http://security.debian.org/ testing/updates main contrib non-free
deb-src http://security.debian.org/ testing/updates main contrib non-free
deb http://http.debian.net/debian/ testing main contrib non-free
deb-src http://http.debian.net/debian/ testing main contrib non-free
# unstable
deb http://http.debian.net/debian/ unstable main contrib non-free
deb-src http://http.debian.net/debian/ unstable main contrib non-free
# experimental
deb http://http.debian.net/debian/ experimental main contrib non-free
deb-src http://http.debian.net/debian/ experimental main contrib non-free
/etc/apt/preferences.d/system:
Package: *
Pin: release a=testing
Pin-Priority: 800
Package: *
Pin: release a=unstable
Pin-Priority: 700
Package: *
Pin: release a=experimental
Pin-Priority: 600
/etc/apt/preferences.d/packages:
Package: lintian
Pin: release a=unstable
Pin-Priority: 900
Package: lintian
Pin: release a=experimental
Pin-Priority: 910
Package: debian-policy
Pin: release a=unstable
Pin-Priority: 999
Package: developers-reference
Pin: release a=unstable
Pin-Priority: 999
I have a few configuration files and a cron job to make all programs dump core files when they crash so that I can file bugs, even for random crashes that are not easy to reproduce. I enabled some kernel settings with sysctl, lifted some security limits to enable core dumps, and added a cron job to delete old core dumps and notify me of new core dumps. In my shell configuration I also turn on two glibc options to cause programs to crash when they have improper memory management.

I also have a second machine I use for bug discovery where I have lots of stuff installed and everything apt pinned in the opposite way: experimental > unstable > testing. When I have time I use this machine to do testing of packages I use, classes of packages that I care about (such as games) and sometimes packages I do not use.

/etc/sysctl.d/corefiles.conf:
fs.suid_dumpable = 1
kernel.core_uses_pid = 1
kernel.core_pattern = /var/cache/corefiles/core-%p-%u-%g-%s-%t-%h-%e
/etc/security/limits.d/corefiles.conf:
*              soft    core            unlimited
*              hard    core            unlimited
/etc/cron.daily/corefiles:
#!/bin/sh
mkdir -p /var/cache/corefiles
chmod 2777 /var/cache/corefiles
if [ $(find /var/cache/corefiles -mtime +100 -a ! -type d | wc -l) -gt 0 ]; then
    echo deleting:
    find /var/cache/corefiles -mtime +100 -a ! -type d
    find /var/cache/corefiles -mtime +100 -a ! -type d -print0 | xargs -0 rm -f
fi
if [ $(find /var/cache/corefiles ! -type d | wc -l) -gt 0 ] ; then
    echo still present:
    find /var/cache/corefiles ! -type d
fi
~/.bash.d/malloc:
export MALLOC_CHECK_=2
export MALLOC_PERTURB_=$(($RANDOM % 255 + 1))
I unfortunately need some packages from contrib/non-free, so I have a cron job to let me know when I accidentally install new packages from contrib/non-free.
@daily diffcmdoutput ~/.cache/non-free-contrib aptitude search ~i~snon-free \| ~i~scontrib
I backup my dpkg package selections and debconf databases.
@daily diffcmdoutput ~/backup/packages dpkg --get-selections
@daily diffcmdoutput ~/backup/config debconf-get-selections 2> /dev/null
I notify myself of changes to the list of new packages so that I can review them, install any useful/interesting ones and tell aptitude to forget them all.
@daily diffcmdoutput ~/.cache/new aptitude search ~N
I notify myself of changes to the list of packages I have installed that are not up-to-date packages from testing. This helps me catch packages removed from testing/unstable/etc that I use.
@daily diffcmdoutput ~/.cache/apt-show-versions sh -c "apt-show-versions | grep -v '/testing uptodate'"
I notify myself of packages that I maintain that are having issues migrating to testing. I considered doing the same for teams I am involved in but they aren't particularly functional teams so there would be a lot of noise.
@daily grep-excuses 'Paul Wise'
I notify myself of RC bugs that apply to testing and are installed. The list is so long that it just makes me depressed instead of motivated to help fix RC bugs so I only notify myself of changes. Even then I rarely do anything other than delete the notifications. If you are looking for ways to help Debian, fixing RC bugs is a great choice.
@daily diffcmdoutput ~/.cache/rcbugs rc-alert -d T --exclude-tags IP+MR
I notify myself of packages that are orphaned or need a new maintainer. There are usually so many packages in this list that it is not useful, so I only notify myself of changes to the list. I rarely adopt packages because I feel overloaded already. If you are looking for ways to help Debian, adopting packages is a good choice.
@daily wnpp-alert --diff
One of my packages is for interacting with servers on the Internet, so I need to run tests periodically to ensure the package works. I do that with a simple Makefile, but maybe I need to move to autopkgtest; I need to find out if it saves data between runs first.
@monthly cd ~/devel/debian/tests ; make
I install debsecan so that I get notified of security updates in unstable and new security issues that are not fixed yet. The way debsecan works is that it notifies about changes in security issues and updates and also includes a full list of all known unfixed issues. I generally install security updates from unstable when I see them. The list of unchanged issues is so long that it makes me wonder how many times I've been cracked already. The oldest issue goes back to 2002 but most of them are 2010 or later. The various parts of WebKit are by far the worst security offenders. I don't bother with the white-listing functionality due to the quantity of security issues and because it isn't possible to add a comment about each white-list item. If you want to get involved with the security team, reporting issues with the data in the security tracker is a good idea.

I subscribe to the ftpmaster RSS feeds for new and removed packages to keep up to date with changes in the archive.

A lot of the above applies to running systems based on Debian stable too. If you have any other thoughts about running Debian systems, please blog about them. The diffcmdoutput command used above is a simple shell script:
#!/bin/sh
# diffcmdoutput: run a command, print a unified diff of its output
# against the cached copy from the previous run, then update the cache.
cache="$1"
shift
temp="$(mktemp "$cache"XXXXXXXXXXXXXX)"
"$@" > "$temp"
diff --unified "$cache" "$temp"
mv --force "$temp" "$cache"

23 October 2012

Raphael Geissert: Where to get checkbashisms from (community service)

Lately I've been spending some time checking the Debian archive for bashisms in preparation for the release of Debian wheezy. This requires running checkbashisms against every /bin/sh script, checking the results by hand to discard false positives, and filing bug reports about the bashisms found.
And of course, fixing and improving checkbashisms; some of that work is to be published soon.

It is funny how fixing some parsing errors leads to regressions in the form of false negatives caused by other parsing errors... oh well.

However, while looking around the web for references about checkbashisms, I noticed that somebody created a sourceforge project under that same name. It is a fork of an old version of checkbashisms, and hasn't seen an update in over a year. It even appears that a FreeBSD port is based on it.

If you are looking for the latest checkbashisms, please get it either from the latest version of devscripts, or from devscripts' git repository.
